Analytical Extraction of Conditional Sobol' Indices via Basis Decomposition of Polynomial Chaos Expansions
In uncertainty quantification, evaluating sensitivity measures under specific conditions (i.e., conditional Sobol' indices) is essential for systems with parameterized responses, such as spatial fields or varying operating conditions. Traditional approaches often rely on point-wise modeling, which is computationally expensive and may lack consistency across the parameter space. This paper demonstrates that, for a pre-trained global Polynomial Chaos Expansion (PCE) model, the analytical conditional Sobol' indices are inherently embedded within its basis functions. By leveraging the tensor-product property of PCE bases, we reformulate the global expansion into a set of analytical coefficient fields that depend on the conditioning variables. Because orthogonality is preserved under the conditional probability measure, we derive closed-form expressions for conditional variances and Sobol' indices. This framework bypasses the need for repetitive modeling or additional sampling, transforming conditional sensitivity analysis into a purely algebraic post-processing step. Numerical benchmarks indicate that the proposed method ensures physical coherence and offers superior numerical robustness and computational efficiency compared to conventional point-wise approaches.
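To make the algebraic post-processing concrete, here is a minimal sketch under simplifying assumptions: two stochastic inputs and one conditioning variable, all uniform on [-1, 1] with an orthonormal Legendre basis, a total-degree-2 expansion, and illustrative random coefficients rather than ones fitted to a real model. The tensor-product structure lets the conditioning variable's basis factor be absorbed into coefficient fields, after which conditional variances are plain sums of squared coefficients.

```python
import numpy as np
from itertools import product
from numpy.polynomial.legendre import Legendre

DEG = 2  # total polynomial degree of the toy expansion

def psi(n, x):
    """Legendre polynomial of degree n, orthonormal w.r.t. U(-1, 1)."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return Legendre(coeffs)(x) * np.sqrt(2 * n + 1)

# Global PCE coefficients c[(a1, a2, k)] for the tensor-product basis
# psi_a1(x1) * psi_a2(x2) * psi_k(s); values are illustrative only.
rng = np.random.default_rng(0)
multi_indices = [m for m in product(range(DEG + 1), repeat=3) if sum(m) <= DEG]
c = {m: rng.normal() for m in multi_indices}

def conditional_sobol(s):
    """First-order conditional Sobol' indices S_1(s), S_2(s).

    Absorb the s-dependent factor into coefficient fields
    c_(a1,a2)(s) = sum_k c_(a1,a2,k) * psi_k(s). Orthonormality in
    (x1, x2) is preserved under the conditional measure, so the
    conditional variances are sums of squared coefficient fields.
    """
    c_field = {}
    for (a1, a2, k), coef in c.items():
        c_field[(a1, a2)] = c_field.get((a1, a2), 0.0) + coef * psi(k, s)
    var_total = sum(v**2 for m, v in c_field.items() if m != (0, 0))
    var_1 = sum(v**2 for (a1, a2), v in c_field.items() if a1 > 0 and a2 == 0)
    var_2 = sum(v**2 for (a1, a2), v in c_field.items() if a2 > 0 and a1 == 0)
    return var_1 / var_total, var_2 / var_total

print(conditional_sobol(0.3))  # (S_1, S_2) conditioned on s = 0.3
```

Because `conditional_sobol` only regroups and squares existing coefficients, evaluating it across a grid of conditioning values costs no new model runs, which is the efficiency argument made in the abstract.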
PolarMix Supplemental Material
In this section, we conduct experiments to analyze how PolarMix benefits LiDAR point cloud learning. We first apply global augmentations, including random rotation and random scaling, to two LiDAR scans separately and then concatenate them for training. As shown in rows '1, 2, 3' of the table, segmentation performance improves with the number of copies, which indicates the effectiveness of this approach in enriching the data distribution. By comparison, PolarMix is more robust to instance spatial location, without much performance drop, and clearly improves the robustness of the baseline with respect to the angular variations of instances (i.e., their spatial location).
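As a rough illustration of the concatenation baseline described above (not of PolarMix itself), the sketch below applies independent global augmentations to two scans and concatenates them. The (N, 4) x/y/z/intensity point layout, z-axis rotation, and scale range are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def global_augment(points, rng):
    """Apply a random z-rotation and random global scaling to one scan."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    rot = np.array([[cos_t, -sin_t, 0.0],
                    [sin_t,  cos_t, 0.0],
                    [0.0,    0.0,   1.0]])
    out = points.copy()
    out[:, :3] = out[:, :3] @ rot.T * rng.uniform(0.95, 1.05)
    return out

def concat_scans(pts_a, labels_a, pts_b, labels_b, rng):
    """Augment two LiDAR scans separately, then concatenate for training."""
    pts = np.concatenate([global_augment(pts_a, rng), global_augment(pts_b, rng)])
    labels = np.concatenate([labels_a, labels_b])
    return pts, labels
```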
Neuronal Gaussian Process Regression
The brain takes the uncertainty intrinsic to our world into account. For example, associating spatial locations with rewards requires predicting not only the expected reward at new spatial locations but also its uncertainty, in order to avoid catastrophic events and forage safely. A powerful and flexible framework for nonlinear regression that takes uncertainty into account in a principled Bayesian manner is Gaussian process (GP) regression. Here I propose that the brain implements GP regression and present neural networks (NNs) for it. First-layer neurons, e.g. hippocampal place cells, have tuning curves that correspond to evaluations of the GP kernel. Output neurons explicitly and distinctively encode the predictive mean and variance, as observed in orbitofrontal cortex (OFC) for the case of reward prediction. Because the weights of a NN implementing exact GP regression do not arise from biological plasticity rules, I present approximations that yield local (anti-)Hebbian synaptic learning rules. The resulting neuronal network approximates the full GP well compared to popular sparse GP approximations and achieves comparable predictive performance.
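The network view of GP regression described in the abstract can be made concrete with a small example: first-layer activations are kernel evaluations k(x*, X), playing the role of tuning curves, and two readouts compute the predictive mean and variance. The RBF kernel, hyperparameters, and toy reward data below are assumptions for illustration, and the sketch implements exact GP regression rather than the paper's biologically plausible approximation.

```python
import numpy as np

def rbf(a, b, length=0.5):
    """Squared-exponential kernel matrix k(a_i, b_j) for 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=20)                # training locations
y = np.sin(2 * X) + 0.1 * rng.normal(size=20)  # noisy rewards
noise = 0.1**2

K = rbf(X, X) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, y)                  # readout weights for the mean
K_inv = np.linalg.inv(K)                       # used by the variance readout

x_star = np.linspace(-2, 2, 5)                 # query locations
k_star = rbf(x_star, X)                        # first-layer "tuning curve" activations
mean = k_star @ alpha                          # mean-encoding output neuron
var = rbf(x_star, x_star).diagonal() - np.einsum(
    "ij,jk,ik->i", k_star, K_inv, k_star)      # variance-encoding output neuron
print(mean, var)
```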